
    On the cusp anomalous dimension in the ladder limit of $\mathcal{N}=4$ SYM

    We analyze the cusp anomalous dimension in the (leading) ladder limit of $\mathcal{N}=4$ SYM and present new results for its higher-order perturbative expansion. We study two different limits with respect to the cusp angle $\phi$. The first is the light-like regime where $x = e^{i\phi} \to 0$. This limit is characterised by a non-trivial expansion of the cusp anomaly as a sum of powers of $\log x$, where the maximum exponent increases with the loop order. The coefficients of this expansion have remarkable transcendentality features and can be expressed by products of single zeta values. We show that the whole logarithmic expansion is fully captured by a solvable Woods-Saxon-like one-dimensional potential. From the exact solution, we extract generating functions for the cusp anomaly as well as for the various specific transcendental structures appearing therein. The second limit that we discuss is the regime of small cusp angle. In this somewhat simpler case, we show how to organise the quantum mechanical perturbation theory in a novel, efficient way by means of a suitable all-order Ansatz for the ground state of the associated Schrödinger problem. Our perturbative setup allows us to systematically derive higher-order corrections in powers of the cusp angle as explicit non-perturbative functions of the effective coupling. This series approximation is compared with the numerical solution of the Schrödinger equation to show that we can achieve very good accuracy over the whole range of coupling and cusp angle. Our results have been obtained by relatively simple techniques. Nevertheless, they provide several non-trivial tests useful to check the application of Quantum Spectral Curve methods to the ladder approximation at non-zero $\phi$ in the two limits we studied. Comment: 21 pages, 3 figures.
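
    To illustrate the kind of numerical check referred to at the end of the abstract, here is a minimal sketch (not the authors' code) that solves a one-dimensional Schrödinger problem with a Woods-Saxon-like well by finite-difference diagonalization; the potential parameters, grid, and units are illustrative assumptions rather than the ones entering the ladder-limit analysis.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Grid for the one-dimensional Schroedinger problem (illustrative range,
# Dirichlet boundary conditions at the grid ends).
N = 2000
x = np.linspace(0.0, 20.0, N)
h = x[1] - x[0]

# Woods-Saxon-like potential well; V0, R, a are placeholder parameters.
V0, R, a = 5.0, 6.0, 0.5
V = -V0 / (1.0 + np.exp((x - R) / a))

# Finite-difference Hamiltonian H = -d^2/dx^2 + V(x) (units with hbar^2/2m = 1).
diag = 2.0 / h**2 + V
offdiag = -1.0 / h**2 * np.ones(N - 1)

# Lowest eigenvalues; in ladder-limit analyses of this kind it is typically
# the ground-state energy that encodes the cusp anomaly.
energies, _ = eigh_tridiagonal(diag, offdiag, select='i', select_range=(0, 4))
print("lowest levels:", energies)
```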

    Dreaming neural networks: forgetting spurious memories and reinforcing pure ones

    The standard Hopfield model for associative neural networks accounts for biological Hebbian learning and acts as the harmonic oscillator for pattern recognition; however, its maximal storage capacity is $\alpha \sim 0.14$, far from the theoretical bound for symmetric networks, i.e. $\alpha = 1$. Inspired by sleeping and dreaming mechanisms in mammal brains, we propose an extension of this model displaying the standard on-line (awake) learning mechanism (which allows the storage of external information in terms of patterns) and an off-line (sleep) unlearning-and-consolidating mechanism (which allows spurious-pattern removal and pure-pattern reinforcement): the resulting daily prescription is able to saturate the theoretical bound $\alpha = 1$, while also remaining extremely robust against thermal noise. Neural and synaptic features are analyzed both analytically and numerically. In particular, beyond obtaining a phase diagram for the neural dynamics, we focus on synaptic plasticity and give explicit prescriptions for the temporal evolution of the synaptic matrix. We analytically prove that our algorithm makes the Hebbian kernel converge with high probability to the projection matrix built over the pure stored patterns. Furthermore, we obtain a sharp and explicit estimate of the "sleep rate" needed to ensure such convergence. Finally, we run extensive numerical simulations (mainly Monte Carlo sampling) to check the approximations underlying the analytical investigations (e.g., the whole theory is developed at the so-called replica-symmetric level, as standard in the Amit-Gutfreund-Sompolinsky reference framework) and possible finite-size effects, finding overall full agreement with the theory. Comment: 31 pages, 12 figures.
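
    To make the awake/asleep prescription concrete, the following is a minimal sketch, assuming a classic Hopfield-Feinstein-Palmer-style unlearning step as a stand-in for the sleep phase; the paper's actual prescription also reinforces pure patterns and comes with an explicit sleep-rate estimate, neither of which is reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 30                                  # neurons and stored random patterns
xi = rng.choice([-1.0, 1.0], size=(P, N))

# Awake phase: standard Hebbian (Hopfield) couplings.
J = xi.T @ xi / N
np.fill_diagonal(J, 0.0)

def relax(J, s, sweeps=20):
    """Zero-temperature asynchronous dynamics: relax to a nearby attractor."""
    s = s.copy()
    for _ in range(sweeps):
        for i in rng.permutation(len(s)):
            s[i] = 1.0 if J[i] @ s >= 0 else -1.0
    return s

# Sleep phase (schematic): unlearning steps that weaken the attractors reached
# from random initial states (mostly spurious mixtures). The consolidation of
# pure patterns used in the paper is omitted; eps and the number of sleep
# steps are illustrative.
eps = 0.01
for _ in range(300):
    s = relax(J, rng.choice([-1.0, 1.0], size=N))
    J -= eps * np.outer(s, s) / N
    np.fill_diagonal(J, 0.0)

# Retrieval check: overlap between relaxed noisy patterns and the originals.
flips = np.where(rng.random(xi.shape) < 0.1, -1.0, 1.0)   # 10% corrupted bits
overlaps = [abs(relax(J, xi[m] * flips[m]) @ xi[m]) / N for m in range(P)]
print("mean retrieval overlap after sleeping:", np.mean(overlaps))
```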

    Chiral trace relations in $\Omega$-deformed $\mathcal{N}=2$ theories

    We consider $\mathcal{N}=2$ $SU(2)$ gauge theories in four dimensions (pure or mass-deformed) and discuss the properties of the simplest chiral observables in the presence of a generic $\Omega$-deformation. We compute them by equivariant localization and analyze the structure of the exact instanton corrections to the classical chiral ring relations. We predict exact relations, valid at all instanton numbers, among the traces $\langle\mathrm{Tr}\,\varphi^{n}\rangle$, where $\varphi$ is the scalar field in the gauge multiplet. In the Nekrasov-Shatashvili limit, such relations may be explained in terms of the available quantized Seiberg-Witten curves. Instead, the full two-parameter deformation enjoys novel features and the ring relations require non-trivial additional derivative terms with respect to the modular parameter. Higher-rank groups are briefly discussed, emphasizing the non-factorization of correlators due to the $\Omega$-deformation. Finally, the structure of the deformed ring relations in the $\mathcal{N}=2^{\star}$ theory is analyzed from the point of view of the Alday-Gaiotto-Tachikawa correspondence, proving consistency as well as some interesting universality properties. Comment: 36 pages. v2: references added.
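
    For orientation only, a purely classical fact (not the deformed, all-instanton relations derived in the paper): for $SU(2)$ the chiral ring is generated by $\mathrm{Tr}\,\varphi^2$, so higher traces are fixed algebraically, e.g. with $\varphi=\mathrm{diag}(a,-a)$:

```latex
% Classical SU(2) relation; the paper studies how its quantum analogue among
% the expectation values <Tr phi^n> is corrected by instantons and by the
% Omega-deformation.
\[
  \mathrm{Tr}\,\varphi^2 = 2a^2 ,\qquad
  \mathrm{Tr}\,\varphi^4 = 2a^4 = \tfrac{1}{2}\bigl(\mathrm{Tr}\,\varphi^2\bigr)^2 .
\]
```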

    Reactive immunization on complex networks

    Epidemic spreading on complex networks depends on the topological structure as well as on the dynamical properties of the infection itself. Generally speaking, highly connected individuals play the role of hubs and are crucial to channel information across the network. On the other hand, static topological quantities measuring the connectivity structure are independent of the dynamical mechanisms of the infection. A natural question is therefore how to supplement the topological analysis with dynamical information that may be extracted from the ongoing infection itself. In this spirit, we propose a novel vaccination scheme that exploits information from the details of the infection pattern at the moment when the vaccination strategy is applied. Numerical simulations of the infection process show that the proposed immunization strategy is effective and robust on a wide class of complex networks.
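
    As a toy illustration of a reactive strategy (not the authors' specific scheme), the sketch below runs a discrete-time SIR epidemic on a random scale-free network and, at a chosen intervention time, vaccinates susceptible neighbours of currently infected nodes; the topology, rates, intervention time, and budget are all illustrative assumptions.

```python
import random
import networkx as nx

random.seed(1)
G = nx.barabasi_albert_graph(2000, 3, seed=1)   # illustrative scale-free topology

beta, mu = 0.06, 0.05                            # infection / recovery probabilities
state = {n: 'S' for n in G}                      # S, I, R, or V (vaccinated)
for n in random.sample(list(G), 10):
    state[n] = 'I'

def reactive_vaccination(budget=200):
    """Vaccinate susceptible neighbours of currently infected nodes."""
    targets = {v for n in G if state[n] == 'I' for v in G[n] if state[v] == 'S'}
    for v in random.sample(sorted(targets), min(budget, len(targets))):
        state[v] = 'V'

for t in range(200):
    if t == 15:                                  # intervene once the outbreak is visible
        reactive_vaccination()
    new_inf = [v for n in G if state[n] == 'I'
               for v in G[n] if state[v] == 'S' and random.random() < beta]
    for n in G:
        if state[n] == 'I' and random.random() < mu:
            state[n] = 'R'
    for v in new_inf:
        state[v] = 'I'

print("final outbreak size:", sum(s in ('I', 'R') for s in state.values()))
```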

    An evolutionary game model for behavioral gambit of loyalists: Global awareness and risk-aversion

    We study the phase diagram of a minority game where three classes of agents are present. Two types of agents play a risk-loving game that we model by the standard Snowdrift Game. The behaviour of the third type of agents is coded by indifference with respect to the game itself: their dynamics is designed to account for risk-aversion as an innovative behavioral gambit. From this point of view, the choice of this solitary strategy is enhanced when innovation starts, while it is depressed when it becomes the majority option. This implies that the payoff matrix of the game becomes dependent on the global awareness of the agents, measured by the relative size of the population of indifferent players. The resulting dynamics is non-trivial, with different kinds of phase transitions depending on a few model parameters. The phase diagram is studied on regular as well as complex networks.
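
    A minimal sketch of this kind of three-strategy dynamics in a well-mixed (replicator) setting, with illustrative Snowdrift payoffs and a schematic rule that rewards the indifferent strategy while it is rare and penalises it once it dominates; the paper's actual payoff structure, its global-awareness coupling, and the network versions are not reproduced here.

```python
import numpy as np

# Illustrative Snowdrift payoffs (benefit b = 1, cost c = 0.6); not the paper's values.
b, c = 1.0, 0.6
R, S, T, P = b - c / 2, b - c, b, 0.0            # C-C, C-D, D-C, D-D payoffs

def payoffs(x):
    """Mean payoffs of (C, D, L) given strategy frequencies x = (xC, xD, xL)."""
    xC, xD, xL = x
    # Loyalists are assumed neutral partners here (zero payoff from C-L and
    # D-L encounters), an illustrative simplification.
    pi_C = R * xC + S * xD
    pi_D = T * xC + P * xD
    # Indifferent/loyalist payoff: rewarded while rare, penalised once it
    # dominates (a schematic stand-in for the global-awareness mechanism).
    pi_L = 0.4 * (1.0 - 2.0 * xL)
    return np.array([pi_C, pi_D, pi_L])

# Discrete-time replicator dynamics on the strategy simplex.
x = np.array([0.45, 0.45, 0.10])
dt = 0.05
for _ in range(4000):
    pi = payoffs(x)
    x = x + dt * x * (pi - x @ pi)
    x = np.clip(x, 1e-12, None)
    x /= x.sum()

print("stationary frequencies (C, D, L):", np.round(x, 3))
```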

    The relativistic Hopfield model with correlated patterns

    In this work we introduce and investigate the properties of the "relativistic" Hopfield model endowed with temporally correlated patterns. First, we review the "relativistic" Hopfield model and briefly describe the experimental evidence underlying correlation among patterns. Then, we study the resulting model by exploiting statistical-mechanics tools in a low-load regime. More precisely, we prove the existence of the thermodynamic limit of the related free energy and we derive the self-consistency equations for its order parameters. These equations are solved numerically to get a phase diagram describing the performance of the system as an associative memory as a function of its intrinsic parameters (i.e., the degree of noise and of correlation among patterns). We find that, beyond the standard retrieval and ergodic phases, the relativistic system exhibits correlated and symmetric regions (genuine effects of temporal correlation) whose widths are, respectively, reduced and increased with respect to the classical case. Comment: 23 pages, 6 figures.
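
    Self-consistency equations of this kind are typically solved by fixed-point iteration. As a schematic example of the numerical procedure only, here is the classical low-load benchmark $m = \tanh(\beta m)$ for a single Mattis magnetization; the relativistic, correlated-pattern equations of the paper are more involved.

```python
import numpy as np

def solve_magnetization(beta, m0=0.9, tol=1e-12, max_iter=10000):
    """Fixed-point iteration for the low-load self-consistency m = tanh(beta * m)."""
    m = m0
    for _ in range(max_iter):
        m_new = np.tanh(beta * m)
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

# Scan the noise level: a non-zero (retrieval) solution appears for beta > 1.
for beta in (0.8, 1.2, 2.0, 4.0):
    print(f"beta = {beta:3.1f}  ->  m = {solve_magnetization(beta):.4f}")
```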

    Dreaming neural networks: rigorous results

    Recently a daily routine for associative neural networks has been proposed: the network Hebbian-learns during the awake state (thus behaving as a standard Hopfield model); then, during its sleep state, optimizing information storage, it consolidates pure patterns and removes spurious ones. This forces the synaptic matrix to collapse to the projector one (ultimately approaching the Kanter-Sompolinsky model). This procedure keeps the learning Hebbian-based (a biological must) but, by taking advantage of a (properly stylized) sleep phase, still reaches the maximal critical capacity (for symmetric interactions). So far this emerging picture (as well as the bulk of papers on unlearning techniques) was supported solely by mathematically challenging routes, mainly replica-trick analyses and numerical simulations. Here we rely extensively on Guerra's interpolation techniques developed for neural networks and, in particular, we extend the generalized stochastic stability approach to the present case. Confining our description within the replica-symmetric approximation (where the previous analyses lie), the picture painted regarding this generalization (and the previously existing variations on the theme) is entirely confirmed. Further, still relying on Guerra's schemes, we develop a systematic fluctuation analysis to check where ergodicity is broken (an analysis entirely absent in previous investigations). We find that, as long as the network is awake, ergodicity is bounded by the Amit-Gutfreund-Sompolinsky critical line (as it should be), but sleeping destroys spin-glass states by extending both the retrieval and the ergodic regions: after an entire sleeping session the only surviving regions are the retrieval and ergodic ones, and this allows the network to achieve the perfect-retrieval regime (the number of storable patterns equals the number of neurons in the network).
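
    For concreteness, the sketch below contrasts the Hebbian kernel with the projector kernel that the sleeping dynamics is stated to approach, checking numerically that stored patterns are exact fixed points of the projector couplings even above the Hebbian capacity; this is a standard construction and not the paper's interpolation argument.

```python
import numpy as np

rng = np.random.default_rng(3)
N, P = 300, 150                          # load alpha = P/N = 0.5, above the Hebbian limit ~0.14
xi = rng.choice([-1.0, 1.0], size=(P, N))

# Hebbian kernel (awake network) and projector kernel (fully slept network).
C = xi @ xi.T / N                         # pattern correlation matrix
J_hebb = xi.T @ xi / N
J_proj = xi.T @ np.linalg.inv(C) @ xi / N

def stable_patterns(J):
    """Count patterns that are fixed points of sign(J s), self-couplings removed."""
    Jn = J - np.diag(np.diag(J))
    return sum(np.all(np.sign(Jn @ xi[m]) == xi[m]) for m in range(P))

print("fixed-point patterns, Hebbian  :", stable_patterns(J_hebb), "/", P)
print("fixed-point patterns, projector:", stable_patterns(J_proj), "/", P)
```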